Enable passing down dynamic dimensions from torch to XLA #5790
Conversation
Please change the PR title to something more descriptive, e.g. "Enable passing down dynamic dimensions from torch to XLA".
Also, change all C++ uses of the macro to an env var like the Python one, so the feature can be enabled without recompiling.
torch_xla/csrc/helpers.cpp
Outdated
@@ -322,6 +329,116 @@ xla::XlaOp XlaHelpers::DynamicReshapeAs(xla::XlaOp input,
        : xla::Reshape(input, shape.dimensions());
}

#if EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM
Not a fan of macros; if you want to guard these functions, you can add a check like XLA_CHECK(getenv(EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM)).
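For illustration, a minimal sketch of that suggestion (the function name is hypothetical; it assumes the runtime::sys_util::GetEnvBool helper this PR already uses in reduction.cpp):

static const bool experimental_unbounded_dynamism =
    runtime::sys_util::GetEnvBool("EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM", false);

xla::XlaOp DynamicUnboundedOpExample(xla::XlaOp input) {
  // Fail fast if this experimental path is reached without opting in; unlike
  // a #if guard, flipping the env var requires no recompilation.
  XLA_CHECK(experimental_unbounded_dynamism)
      << "Set EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM=1 to use this path.";
  // ... unbounded-dynamism lowering would go here ...
  return input;
}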
Sounds good, will update.
torch_xla/csrc/helpers.cpp
Outdated
get_dim_ops.push_back(xla::GetDimensionSize(aux_op, i));

auto s = ShapeHelper::ShapeOfXlaOp(get_dim_ops.back());
std::cout << "implicitB shape: " << xla::ShapeUtil::HumanString(s)
ditto
torch_xla/csrc/lowering_context.cpp
Outdated
xla::internal::XlaBuilderFriend builder_friend;
auto* inst = builder_friend.GetInstruction(result_ops[0]);
auto* mutable_dynamic =
    inst->mutable_shape()->mutable_is_dynamic_dimension();
if (mutable_dynamic->empty()) {
  for (int i = 0; i < inst->dimensions_size(); i++) {
    mutable_dynamic->Add(false);
  }
}
auto* mutable_dims = inst->mutable_shape()->mutable_dimensions();
for (const auto dim : casted->dynamic_dims()) {
  mutable_dynamic->Set(dim, true);
  mutable_dims->Set(dim, kUnboundedSize);
}
Do we always need to run these, or should we guard this in a conditional?
Let's guard it by checking whether the XlaNode has dynamic dims.
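A sketch of that guard, wrapping the snippet above (assuming casted is the XlaNode in question and dynamic_dims() returns the indices of its dynamic dimensions, as elsewhere in this PR):

// Skip the shape-proto rewrite entirely for nodes without dynamic dims.
if (!casted->dynamic_dims().empty()) {
  xla::internal::XlaBuilderFriend builder_friend;
  auto* inst = builder_friend.GetInstruction(result_ops[0]);
  auto* mutable_dynamic =
      inst->mutable_shape()->mutable_is_dynamic_dimension();
  if (mutable_dynamic->empty()) {
    // is_dynamic_dimension may be unset; initialize one flag per dimension
    // before marking individual dimensions dynamic.
    for (int i = 0; i < inst->dimensions_size(); i++) {
      mutable_dynamic->Add(false);
    }
  }
  auto* mutable_dims = inst->mutable_shape()->mutable_dimensions();
  for (const auto dim : casted->dynamic_dims()) {
    mutable_dynamic->Set(dim, true);
    mutable_dims->Set(dim, kUnboundedSize);  // kUnboundedSize marks the size as unknown
  }
}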
torch_xla/csrc/elementwise.cpp
Outdated
    0, input_shape.element_type(), input.builder()));
xla::XlaOp scalar = XlaHelpers::ScalarValue<float>(
    0, input_shape.element_type(), input.builder());
if (experimental_unbounded_dynamism) {
Can this path execute in the non-export path?
What do you think about limiting this experimental condition to the torch.export path?
Per the discussion below, it seems there is no better solution than using an env variable to enable the unbounded dynamism lowering. But we can put more thought into this one.
Sounds good. This issue doesn't block this PR; let's prepare a proposal in parallel.
    [](int64_t size) { return size == kUnboundedSize; });
}

xla::XlaOp XlaHelpers::DynamicUnboundedReshape(
Can you please prepare an opset issue to track supporting this mode of dynamism?
Similar work example: #5764
Sure, I think we have a list internally
Does it work to maintain the list on GH?
torch_xla/csrc/reduction.cpp
Outdated
@@ -31,6 +31,9 @@ struct SummationResult {
  xla::XlaOp result;
};

static const bool experimental_unbounded_dynamism =
    runtime::sys_util::GetEnvBool("EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM", false);
Also, when adding env variables, please keep this file up to date:
https://github.com/pytorch/xla/blob/master/configuration.yaml
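For reference, a hypothetical entry for that file (the field names here are illustrative; follow the schema the existing entries in configuration.yaml actually use):

EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM:
  description: Enable the experimental unbounded-dynamism lowering path (export only).
  type: bool
  default_value: false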
The current thought is to use an env variable to limit the code path to "export only", as you've pointed out in many places. If there are better mechanisms to accomplish that, we can use something different.
Thanks @lsy323 - I left a few comments.
mostly lgtm, minor nits
LGTM. Thanks.
* port sandeep unbounded dynamism change
* Enable unbounded dynamism using env var, add more guards for unbounded dynamism code path

Co-authored-by: Siyuan Liu <lsiyuan@google.com>
This is based on #5778. This PR focuses on supporting unbounded dynamism in the LTC infra. The e2e flow will work once the unbounded dynamism support on the XLA side is merged and the required op lowering logic is updated in torch_xla.
The feature is gated behind the env var EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM=1.
Test:
Tested with @sdasgup3's XLA change to propagate unbounded dynamism (branch). To make the patch work with torch_xla HEAD, the branch needs to be rebased onto the pinned XLA version: Rebased branch
The unbounded dynamism can be propagated end to end: exporting a model with a dynamic input dimension gives StableHLO whose corresponding dimension sizes are unbounded (printed as ? in the StableHLO types).